MiniMax Entropy and Maximum Likelihood: Complementarity of Tasks, Identity of Solutions
Authors
Abstract
The concept of the exponential family is generalized via simple and general exponential forms, and simple and general potentials are introduced. Maximum Entropy (MaxEnt, ME) and Maximum Likelihood (ML) tasks are defined. The ML task on the simple exponential form and the ME task on the simple potentials are proved to be complementary in setup and identical in solutions. The ML task on the general exponential form and the ME task on the general potentials are weakly complementary, leading to the same necessary conditions. For this case, a hypothesis about the complementarity of the ML and MiniMax Entropy tasks and the identity of their solutions is put forward, motivated by an analytical special case as well as several numerical investigations. MiniMax Entropy can thus be viewed as a generalization of MaxEnt to parametric linear inverse problems, and its complementarity with ML as yet another argument in favor of Shannon's entropy criterion.
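To make the claimed identity concrete, here is a minimal numerical sketch (our illustration, not the paper's construction) in the simplest setting: on a finite support, the ML fit of the exponential family $p_\lambda(x) \propto \exp(\lambda x)$ and the MaxEnt distribution constrained to match the sample mean coincide. The support, toy data, and solver choices are all assumptions of the sketch.

```python
# Sketch: ML on an exponential family vs. MaxEnt under a mean constraint.
# Toy data on the support {1,...,6}; all names here are illustrative.
import numpy as np
from scipy.optimize import minimize_scalar, minimize

rng = np.random.default_rng(0)
support = np.arange(1, 7, dtype=float)
data = rng.choice(support, size=1000, p=np.array([1, 1, 2, 2, 3, 3]) / 12.0)

# --- ML task: maximize the average log-likelihood over the canonical parameter ---
def neg_log_lik(lam):
    logZ = np.log(np.sum(np.exp(lam * support)))   # normalizer of the family
    return -(lam * data.mean() - logZ)             # average negative log-likelihood

ml = minimize_scalar(neg_log_lik, bounds=(-5, 5), method="bounded")
p_ml = np.exp(ml.x * support)
p_ml /= p_ml.sum()

# --- ME task: maximize Shannon entropy subject to matching the sample mean ---
def neg_entropy(p):
    return np.sum(p * np.log(p + 1e-12))

cons = [{"type": "eq", "fun": lambda p: p.sum() - 1.0},
        {"type": "eq", "fun": lambda p: p @ support - data.mean()}]
me = minimize(neg_entropy, np.ones(6) / 6, bounds=[(1e-9, 1)] * 6, constraints=cons)

print("ML solution:", np.round(p_ml, 4))
print("ME solution:", np.round(me.x, 4))   # coincides up to solver tolerance
```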
Similar resources
Estimating a Bounded Normal Mean Relative to Squared Error Loss Function
Let $X_1, \ldots, X_n$ be a random sample from a normal distribution with unknown mean $\theta$ and known variance $\sigma^2$. The usual estimator of the mean, i.e., the sample mean $\bar{X}$, is the maximum likelihood estimator, which under the squared error loss function is a minimax and admissible estimator. In many practical situations, $\theta$ is known in advance to lie in an interval, say $[-m, m]$ for some $m > 0$. In this case, the maximum likelihood estimator...
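As a concrete illustration of the constrained estimator sketched above (our example; the bound $m$ and the data are arbitrary), the maximum likelihood estimator under the interval restriction is the sample mean projected (clipped) onto $[-m, m]$:

```python
# Restricted MLE for a bounded normal mean: clip the sample mean to [-m, m].
import numpy as np

def bounded_mle(x, m):
    return float(np.clip(np.mean(x), -m, m))

rng = np.random.default_rng(1)
x = rng.normal(loc=1.7, scale=1.0, size=20)   # true mean lies outside [-1, 1]
print(bounded_mle(x, m=1.0))                  # -> 1.0, the boundary of the interval
```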
Why Maximum Entropy? A Non-axiomatic Approach
Ill-posed inverse problems of the form y = Xp, where y is a J-dimensional vector of data, p is an m-dimensional probability vector which cannot be measured directly, and X is a known J × m matrix of observable variables, J < m, are frequently solved by Shannon's entropy maximization (MaxEnt, ME). Several axiomatizations were proposed (see for instance [1], [2], [3], [4], [5], [6], [7], [8], a...
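A minimal sketch of the MaxEnt recipe this snippet describes, with assumed toy dimensions and illustrative variable names: maximize Shannon entropy over the probability simplex subject to the linear data constraint y = Xp.

```python
# MaxEnt solution of an underdetermined linear inverse problem y = Xp, J < m.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(2)
J, m = 2, 5
X = rng.uniform(size=(J, m))
p_true = rng.dirichlet(np.ones(m))
y = X @ p_true                                   # observed data (feasible by construction)

def neg_entropy(p):
    return np.sum(p * np.log(p + 1e-12))         # -H(p)

cons = [{"type": "eq", "fun": lambda p: p.sum() - 1.0},
        {"type": "eq", "fun": lambda p: X @ p - y}]
res = minimize(neg_entropy, np.ones(m) / m, bounds=[(0, 1)] * m, constraints=cons)
print("MaxEnt solution p:", np.round(res.x, 4))
```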
Learning in Gibbsian Fields: How Accurate and How Fast Can It Be?
Gibbsian fields or Markov random fields are widely used in Bayesian image analysis, but learning Gibbs models is computationally expensive. The computational complexity is pronounced in the recent minimax entropy (FRAME) models, which use large neighborhoods and hundreds of parameters [22]. In this paper, we present a common framework for learning Gibbs models. We identify two key factors that ...
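The following is not the FRAME algorithm itself, but a hedged toy version of the expense the snippet refers to: each maximum likelihood gradient step for a Gibbs model needs model expectations, estimated here by Gibbs sampling on a one-dimensional Ising chain (all sizes, sweep counts, and learning rates are assumptions of the sketch).

```python
# Toy Gibbs-model learning: stochastic-gradient ML where each step runs MCMC.
import numpy as np

rng = np.random.default_rng(3)
N = 30                                            # chain length

def stat(s):                                      # sufficient statistic: neighbor agreement
    return np.sum(s[:-1] * s[1:])

def gibbs_sample(beta, sweeps=100):               # Gibbs sampler for a 1D Ising chain
    s = rng.choice([-1, 1], size=N)
    for _ in range(sweeps):
        for i in range(N):
            field = (s[i - 1] if i > 0 else 0) + (s[i + 1] if i < N - 1 else 0)
            p_up = 1.0 / (1.0 + np.exp(-2.0 * beta * field))
            s[i] = 1 if rng.random() < p_up else -1
    return s

beta_true = 0.6
data = [gibbs_sample(beta_true) for _ in range(30)]      # synthetic "observed" fields
target = np.mean([stat(s) for s in data])

beta, lr = 0.0, 0.02
for _ in range(60):                                      # each step needs fresh MCMC samples
    model = np.mean([stat(gibbs_sample(beta)) for _ in range(5)])
    beta += lr * (target - model)                        # grad log-lik = E_data - E_model
print("estimated beta:", round(beta, 2), " true beta:", beta_true)
```

Even this toy chain needs hundreds of thousands of site updates, which is the cost factor the paper's framework addresses.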
Determination of Maximum Bayesian Entropy Probability Distribution
In this paper, we consider methods for determining maximum entropy multivariate distributions with a given prior, under the constraints that the marginal distributions, or the marginals and the covariance matrix, are prescribed. Next, numerical solutions are considered for cases where closed-form solutions are unavailable. Finally, these methods are illustrated via some numerical examples.
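For the marginals-only, uniform-prior special case, a small numerical check (our illustration, not from the paper) of the familiar fact behind such determinations: among bivariate distributions with prescribed marginals, the independent coupling attains the maximum entropy.

```python
# Entropy of the product coupling vs. a correlated coupling with the same marginals.
import numpy as np

def H(P):
    P = P[P > 0]
    return -np.sum(P * np.log(P))

px = np.array([0.2, 0.5, 0.3])
py = np.array([0.6, 0.4])
P_indep = np.outer(px, py)                        # product of the marginals

# a correlated coupling with identical marginals
P_corr = P_indep + np.array([[0.05, -0.05], [-0.05, 0.05], [0.0, 0.0]])
assert np.allclose(P_corr.sum(axis=1), px) and np.allclose(P_corr.sum(axis=0), py)

print("H(independent):", round(H(P_indep), 4))    # the larger of the two
print("H(correlated): ", round(H(P_corr), 4))
```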
Minimax Estimation of Discrete Distributions under $\ell_1$ Loss
We consider the problem of discrete distribution estimation under $\ell_1$ loss. We provide tight upper and lower bounds on the maximum risk of the empirical distribution (the maximum likelihood estimator) and on the minimax risk in regimes where the support size $S$ may grow with the number of observations $n$. We show that, among distributions with bounded entropy $H$, the asymptotic maximum risk for the e...
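An illustrative Monte Carlo sketch (assumed sizes; not the paper's bounds) of the quantity studied: the $\ell_1$ risk of the empirical distribution at a fixed $p$, compared against the well-known $\sqrt{S/n}$ worst-case scale.

```python
# Monte Carlo estimate of the l1 risk of the empirical distribution.
import numpy as np

rng = np.random.default_rng(4)
S, n, trials = 100, 1000, 500
p = rng.dirichlet(np.ones(S))                    # a fixed true distribution

risk = 0.0
for _ in range(trials):
    counts = rng.multinomial(n, p)
    risk += np.abs(counts / n - p).sum()         # l1 distance to the truth
print("estimated E||p_hat - p||_1:", round(risk / trials, 4))
# the worst-case risk is known to scale like sqrt(S / n)
print("sqrt(S/n) benchmark:", round(np.sqrt(S / n), 4))
```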